Deep learning models for medical image segmentation can fail unexpectedly and spectacularly for pathological cases and for images acquired at different centers than the training images, with labeling errors that violate expert knowledge. Such errors undermine the trustworthiness of deep learning models for medical image segmentation. Mechanisms for detecting and correcting such failures are essential for safely translating this technology into clinics, and are likely to be a requirement of future regulations on artificial intelligence (AI). In this work, we propose a trustworthy AI theoretical framework and a practical system that can augment any backbone AI system with a fallback method and a fail-safe mechanism based on Dempster-Shafer theory. Our approach relies on an actionable definition of trustworthy AI. It automatically discards the voxel-level labels predicted by the backbone AI that violate expert knowledge and relies on a fallback for those voxels. We demonstrate the effectiveness of the proposed trustworthy AI approach on the largest reported annotated dataset of fetal MRI, consisting of 540 manually annotated fetal brain 3D T2w MRIs from 13 centers. Our trustworthy AI method improves the robustness of a state-of-the-art backbone AI for fetal brain MRIs acquired across various centers and for fetuses with various brain abnormalities.
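A minimal sketch of the fail-safe step in Python, assuming the backbone and fallback each output voxel-wise class probabilities and that expert knowledge is encoded as a per-voxel class-plausibility mask; all names are illustrative, not the authors' implementation:

import numpy as np

def fail_safe_segmentation(backbone_probs, fallback_probs, plausible_mask):
    # backbone_probs, fallback_probs: (C, X, Y, Z) class probabilities.
    # plausible_mask: (C, X, Y, Z) boolean, True where class c is anatomically
    # plausible at that voxel (an assumption about how expert knowledge
    # could be encoded).
    backbone_labels = backbone_probs.argmax(axis=0)  # (X, Y, Z)
    # A backbone prediction "violates expert knowledge" when its winning
    # class is implausible at that voxel; those voxels use the fallback.
    violates = ~np.take_along_axis(plausible_mask, backbone_labels[None], axis=0)[0]
    fallback_labels = fallback_probs.argmax(axis=0)
    return np.where(violates, fallback_labels, backbone_labels)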
This paper describes our method for our participation in the FeTA challenge 2021 (team name: TRABIT). The performance of convolutional neural networks for medical image segmentation is thought to correlate positively with the amount of training data. The FeTA challenge does not restrict participants to using only the provided training data, but also allows the use of other publicly available sources. Yet, open-access fetal brain data remains limited. An advantageous strategy could therefore be to expand the training data to cover broader perinatal brain imaging sources. Perinatal brain MRIs that are currently publicly available, in addition to the FeTA challenge data, span normal and pathological fetal atlas spaces as well as neonatal scans. However, perinatal brain MRIs segmented in different datasets typically come with different annotation protocols. This makes it challenging to combine these datasets to train deep neural networks. We recently proposed a family of loss functions, the label-set loss functions, for partially supervised learning. Label-set loss functions allow training deep neural networks with partially segmented images, i.e. segmentations in which some classes may be grouped into super-classes. We propose to use label-set loss functions to improve the segmentation performance of a state-of-the-art deep learning pipeline for multi-class fetal brain segmentation by merging several publicly available datasets. To promote generalizability, our approach does not introduce any additional hyper-parameter tuning.
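A minimal sketch of a label-set (marginalized) Dice loss in Python/PyTorch, under the assumption that the probability of a super-class is the sum of the probabilities of the fine-grained classes it contains; names and details are illustrative rather than the exact loss from the paper:

import torch

def labelset_dice_loss(probs, target_partial, labelset_map, eps=1e-5):
    # probs: (B, C, ...) softmax over the C fine-grained classes.
    # target_partial: (B, ...) integer map over S super-classes.
    # labelset_map: labelset_map[s] lists the fine-grained class indices
    # grouped into super-class s for this partially annotated dataset.
    dice = 0.0
    for s, classes in enumerate(labelset_map):
        p_s = probs[:, classes].sum(dim=1)   # marginalize over the label set
        t_s = (target_partial == s).float()
        inter = (p_s * t_s).sum()
        dice += (2 * inter + eps) / (p_s.sum() + t_s.sum() + eps)
    return 1 - dice / len(labelset_map)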
Limiting failures of machine learning systems is of paramount importance for safety-critical applications. In order to improve the robustness of machine learning systems, Distributionally Robust Optimization (DRO) has been proposed as a generalization of Empirical Risk Minimization (ERM). However, its use in deep learning has been severely restricted due to the relative inefficiency of the optimizers available for DRO in comparison to the stochastic gradient descent (SGD) optimizers used for ERM. We propose SGD with hardness weighted sampling, a principled and efficient optimization method for DRO in machine learning that is particularly suited in the context of deep learning. Similar to hard example mining strategies in practice, the proposed algorithm is straightforward to implement and computationally as efficient as SGD-based optimizers used for deep learning, requiring minimal overhead computation. In contrast to typical ad hoc hard mining approaches, we prove the convergence of our DRO algorithm for over-parameterized deep learning networks with ReLU activations and a finite number of layers and parameters. Our experiments on fetal brain 3D MRI segmentation and brain tumor segmentation in MRI demonstrate the feasibility and usefulness of our approach. Trained with our hardness weighted sampling, a state-of-the-art deep learning pipeline improves robustness to anatomical variabilities in automatic fetal brain 3D MRI segmentation, and improves robustness to image protocol variations in brain tumor segmentation. Our code is available at https://github.com/lucasfidon/hardnessweightedsampler.
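A minimal sketch of the sampler in Python/PyTorch, drawing training examples with probability proportional to a softmax of stale per-example losses; this illustrates the idea only, and the repository above holds the reference implementation:

import torch

class HardnessWeightedSampler(torch.utils.data.Sampler):
    # Examples are sampled with probability softmax(beta * loss), so harder
    # (higher-loss) examples are seen more often, approximating the DRO
    # objective with a KL-regularized adversarial distribution.
    def __init__(self, num_examples, beta=1.0, init_loss=1.0):
        self.losses = torch.full((num_examples,), init_loss)
        self.beta = beta
        self.num_examples = num_examples

    def update(self, indices, losses):
        # Refresh the stale loss estimates for the examples just seen.
        self.losses[indices] = losses.detach().cpu()

    def __iter__(self):
        probs = torch.softmax(self.beta * self.losses, dim=0)
        yield from torch.multinomial(probs, self.num_examples, replacement=True).tolist()

    def __len__(self):
        return self.num_examples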
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
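A minimal sketch of a NAIVEATTACK-style trigger in Python/PyTorch, assuming images stored as a (B, C, H, W) tensor; the patch trigger and the distillation entry point are illustrative assumptions, not the paper's code:

import torch

def add_patch_trigger(images, patch_size=3, value=1.0):
    # Stamp a small solid patch in the bottom-right corner of each image;
    # applied to the raw data before distillation, not during model training.
    poisoned = images.clone()
    poisoned[..., -patch_size:, -patch_size:] = value
    return poisoned

# Hypothetical usage: poison a fraction of the raw set, relabel it with the
# attacker's target class, then run dataset distillation on the poisoned set.
# x_train[:num_poison] = add_patch_trigger(x_train[:num_poison])
# y_train[:num_poison] = target_class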
We present a dynamic path planning algorithm to navigate an amphibious rotor craft through a concave time-invariant obstacle field while attempting to minimize energy usage. We create a nonlinear quaternion state model that represents the rotor craft dynamics above and below the water. The 6-degree-of-freedom dynamics are used within a layered architecture to generate motion paths for the vehicle to follow, along with the required control inputs. The rotor craft has a 3-dimensional map of its surroundings that is updated via limited-range onboard sensor readings within the current medium (air or water). Path planning is done via PRM and D* Lite.
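One concrete piece of such a model is the attitude kinematics q_dot = 0.5 * q ⊗ (0, omega); a short illustrative sketch in Python with scalar-first quaternions (an assumption, not the authors' code):

import numpy as np

def quat_derivative(q, omega):
    # q = (w, x, y, z) attitude quaternion; omega = body angular rates.
    # Implements q_dot = 0.5 * q ⊗ (0, omega) via the quaternion product.
    w, x, y, z = q
    ox, oy, oz = omega
    return 0.5 * np.array([
        -x * ox - y * oy - z * oz,
         w * ox + y * oz - z * oy,
         w * oy - x * oz + z * ox,
         w * oz + x * oy - y * ox,
    ])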
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack, with higher level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervision to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
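A minimal sketch of the masked-token objective in Python/PyTorch, assuming images already quantized to discrete tokens and a text embedding from a frozen LLM; the model signature and mask token id are placeholders, not Muse's actual API:

import torch
import torch.nn.functional as F

def masked_token_loss(transformer, image_tokens, text_emb, mask_ratio=0.5, mask_id=0):
    # image_tokens: (B, N) discrete codes; text_emb: conditioning vectors.
    b, n = image_tokens.shape
    mask = torch.rand(b, n) < mask_ratio          # randomly choose tokens to hide
    inputs = image_tokens.masked_fill(mask, mask_id)
    logits = transformer(inputs, text_emb)        # (B, N, vocab_size), assumed
    # Train only on the masked positions, as in masked modeling.
    return F.cross_entropy(logits[mask], image_tokens[mask])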
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.
Logic Mill is a scalable and openly accessible software system that identifies semantically similar documents within either one domain-specific corpus or multi-domain corpora. It uses advanced Natural Language Processing (NLP) techniques to generate numerical representations of documents. Currently it leverages a large pre-trained language model to generate these document representations. The system focuses on scientific publications and patent documents and contains more than 200 million documents. It is easily accessible via a simple Application Programming Interface (API) or via a web interface. Moreover, it is continuously being updated and can be extended to text corpora from other domains. We see this system as a general-purpose tool for future research applications in the social sciences and other domains.
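A minimal sketch of the underlying retrieval idea in Python: embed documents with a pre-trained language model and rank candidates by cosine similarity. The library and model choice here are illustrative assumptions, not Logic Mill's actual API:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder model choice
docs = ["A method for gene editing ...", "CRISPR-based genome modification ..."]
query = "targeted DNA modification techniques"
doc_vecs = model.encode(docs)
query_vec = model.encode(query)
scores = util.cos_sim(query_vec, doc_vecs)        # similarity of query to each doc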
The release of ChatGPT, a language model capable of generating text that appears human-like and authentic, has gained significant attention beyond the research community. We expect that the convincing performance of ChatGPT incentivizes users to apply it to a variety of downstream tasks, including prompting the model to simplify their own medical reports. To investigate this phenomenon, we conducted an exploratory case study. In a questionnaire, we asked 15 radiologists to assess the quality of radiology reports simplified by ChatGPT. Most radiologists agreed that the simplified reports were factually correct, complete, and not potentially harmful to the patient. Nevertheless, instances of incorrect statements, missed key medical findings, and potentially harmful passages were reported. While further studies are needed, the initial insights of this study indicate a great potential in using large language models like ChatGPT to improve patient-centered care in radiology and other medical domains.